Simulating Stochastic Processes#

1. Discrete time Markov Chain with finite state space#

Let \(\pi_t\) be the probability distribution at time \(t\), where \(t = 0, 1, 2, \dots\). Let \(\mathbb{P}\) be the \(n\times n\) one-step transition probability matrix. By the Chapman-Kolmogorov equation, \(\pi_{t+1} = \pi_t\mathbb{P}\) and

\[\pi_t = \pi_0\mathbb{P}^t\]

The limiting (stationary) probability distribution satisfies \(\pi_\infty = \pi_\infty\mathbb{P}\). Thus \(\pi_\infty\) is the eigenvector of \(\mathbb{P}'\) (the transpose of \(\mathbb{P}\)) corresponding to eigenvalue 1, normalized so its entries sum to 1.
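As a quick sanity check of this fixed-point characterization, here is a minimal sketch with a hypothetical 2-state chain (the matrix `P` below is an illustrative assumption, not part of this notebook): the eigenvector of \(P'\) for eigenvalue 1, once normalized, is left unchanged by one more transition step.

```r
# Hypothetical 2-state chain; P[i, j] = P(next = j | current = i)
P <- matrix(c(0.9, 0.1,
              0.2, 0.8), nrow = 2, byrow = TRUE)

# Eigenvector of t(P) for eigenvalue 1, normalized to sum to 1
pi_inf <- Re(eigen(t(P))$vectors[, 1])
pi_inf <- pi_inf / sum(pi_inf)        # here (2/3, 1/3)

# Stationarity: one more step leaves pi_inf unchanged
max(abs(pi_inf %*% P - pi_inf))       # numerically zero
```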

Specify the states of the process#

states = c("ATLANTA","CHICAGO","DC","NYC")
nstates = length(states)

Initial Probability at time 0#

pi_0 = c(1,0,0,0)

Transition Probability Matrix#

# Random row-stochastic matrix: each row is normalized to sum to 1
tranP = matrix(runif(nstates*nstates),nrow=nstates,ncol=nstates)
for(i in 1:nstates) tranP[i,] = tranP[i,]/sum(tranP[i,])

The Probability distribution at time n#

ntime = 20
pi_n = matrix(0,nrow=ntime,ncol=nstates)
tranP_n = diag(nstates)            # P^0 = identity

for(n in 1:ntime){
    tranP_n = tranP %*% tranP_n    # P^n
    pi_n[n,] = pi_0 %*% tranP_n    # pi_n = pi_0 P^n
}

print("the probability distribution at time n")
pi_n

print("the n-step transition probability matrix")
tranP_n
[1] "the probability distribution at time n"
0.4153128 0.10983998 0.09042471 0.3844226
0.3363779 0.06879544 0.23445368 0.3603730
0.3286973 0.07117348 0.24573353 0.3543956
0.3281437 0.07119461 0.24769770 0.3529640
0.3281463 0.07126231 0.24781316 0.3527783
0.3281550 0.07126842 0.24782103 0.3527556
0.3281569 0.07126948 0.24781987 0.3527537
0.3281572 0.07126956 0.24781958 0.3527536
0.3281573 0.07126956 0.24781953 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
0.3281573 0.07126956 0.24781952 0.3527536
[1] "the n-step transition probability matrix"
0.3281573 0.07126956 0.2478195 0.3527536
0.3281573 0.07126956 0.2478195 0.3527536
0.3281573 0.07126956 0.2478195 0.3527536
0.3281573 0.07126956 0.2478195 0.3527536

Limiting probability distribution#

pi = eigen(t(tranP))$vectors[,1]   # eigenvector for the leading eigenvalue (= 1)
pi = Re(pi/sum(pi))                # drop the zero imaginary part and normalize
pi
[1] 0.32815727 0.07126956 0.24781952 0.35275364

Simulation#

nsim = 50
x = rep(0,nsim)
x[1] = sample(1:nstates,1,prob = pi_0)   # draw the initial state from pi_0
for(i in 2:nsim){
    # draw the next state from the row of P indexed by the current state
    x[i] = sample(1:nstates,1,prob = tranP[x[i-1],])
}

freq = table(states[x])
freq/sum(freq)   # empirical visit frequencies

plot(1:nsim, x, xlab="time", ylab="state", type="l")
                  
ATLANTA CHICAGO      DC     NYC 
   0.32    0.04    0.30    0.34 
_images/markov_13_1.png
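The 50-step frequencies above only roughly match the limiting distribution. By the ergodic theorem, the visit frequencies of a single path converge to \(\pi_\infty\) as the path grows; a sketch with a hypothetical fixed 2-state chain (the matrix and seed are illustrative assumptions, not from this notebook):

```r
set.seed(42)
# Hypothetical chain with known stationary distribution (2/3, 1/3)
P <- matrix(c(0.9, 0.1,
              0.2, 0.8), nrow = 2, byrow = TRUE)
nsim <- 1e5
x <- integer(nsim)
x[1] <- 1
for (i in 2:nsim) x[i] <- sample(1:2, 1, prob = P[x[i - 1], ])

table(x) / nsim   # visit frequencies, close to (2/3, 1/3)
```

With only 50 steps the deviation is large; at 1e5 steps the frequencies typically agree with \(\pi_\infty\) to two decimal places.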

Animation#

# (x, y) plot coordinates for ATLANTA, CHICAGO, DC, NYC
data = rbind(c(1,1),c(2,2),c(2,0),c(3,1))
data = data[x,]
library(plotly)

df <- data.frame(
  x = data[,1], 
  y = data[,2], 
  f = 1:nsim,
  s = states[x]
)

fig <- df %>%
  plot_ly(
    x = ~x,
    y = ~y,
    frame = ~f,
    type = 'scatter',
    mode = 'markers',
    color = I("blue"),
    size = I(50)
  )

t <- list(
  family = "sans serif",
  size = 18,
  color = toRGB("red"))

fig <- fig %>% add_text(x = data[,1],y=data[,2],text = states[x],textfont=t,textposition="top right")
fig <- fig %>% layout(xaxis = list(range = c(0, 4)), yaxis = list(range = c(0,3)), showlegend = FALSE)

fig

2. Random walks#

A random walk is a Markov chain with a countably infinite state space. Let \(\pi_t\) be the probability distribution at time \(t\), and let \(\mathbb{P}\) be the one-step transition probability matrix (now infinite-dimensional). The Chapman-Kolmogorov equation still holds, i.e.,

\[\pi_{t+1} = \pi_t\mathbb{P}\]

For a simple random walk this reads \(\pi_{t+1}^i = \pi_t^{i-1}\times q + \pi_t^{i+1}\times p\), where \(p\) is the probability of stepping down by 1 and \(q = 1 - p\) is the probability of stepping up by 1.
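The recursion can be checked numerically by propagating \(\pi_t\) on a truncated lattice; a sketch (the lattice size and horizon are arbitrary choices, and truncation is harmless as long as the walk cannot reach the boundary):

```r
p <- 0.5; q <- 1 - p                  # p = P(step down), q = P(step up)
n <- 20; t_max <- 10
states <- -n:n                        # truncated lattice -n..n
pi_t <- as.numeric(states == 0)       # start at 0 with probability 1

for (s in 1:t_max) {
  pi_next <- rep(0, length(states))
  for (i in 2:(length(states) - 1)) {
    # the recursion: mass flows up from i-1 (prob q) and down from i+1 (prob p)
    pi_next[i] <- q * pi_t[i - 1] + p * pi_t[i + 1]
  }
  pi_t <- pi_next
}

sum(pi_t)                             # probability is conserved: 1
pi_t[states == 0]                     # P(X_10 = 0) = choose(10, 5) / 2^10
```

For the symmetric walk the result agrees with the exact binomial value \(\binom{10}{5}/2^{10} \approx 0.246\).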

Initial probability distribution#

x[1] = 0   # the walk starts at state 0 with probability 1

Transition probability#

tranP = function(current,p){
    # One step of the walk: move down with probability p, up with probability 1 - p
    return(ifelse(runif(1)<=p,current-1,current+1))
}

# Sanity check: starting from 3, about half of the draws should land on 2
y = 1:10000
for(i in 1:10000) y[i] = tranP(3,0.5)
sum(y==2)
5009
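The count above is consistent with theory: each draw lands on 2 with probability 0.5, so the number of 2's in 10000 draws is Binomial(10000, 0.5), and 5009 is well within two standard deviations of the mean:

```r
# Mean and standard deviation of Binomial(10000, 0.5)
n <- 10000; p <- 0.5
n * p                    # expected count: 5000
sqrt(n * p * (1 - p))    # standard deviation: 50
```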

Simulation#

nsim = 1000   # number of independent walks
t = 100       # number of time steps per walk
p = 0.5
x = matrix(0, nsim, t)   # every walk starts at 0 (first column)
for(j in 1:nsim){
    for(i in 2:t){
        x[j,i] = tranP(x[j,i-1],p)
    }
}
hist(x[,100])   # distribution of the walks at time 100
_images/markov_24_0.png
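For \(p = 0.5\) the position at time \(t\) is a sum of \(t-1\) iid \(\pm 1\) steps, so \(E[X_t] = 0\) and \(\mathrm{Var}(X_t) = t - 1\); by the central limit theorem the histogram above should look approximately normal with standard deviation \(\sqrt{99} \approx 10\). A quick check (the seed is an assumption; vectorized sampling replaces the step-by-step loop but yields the same distribution):

```r
set.seed(1)
nsim <- 1000; t <- 100
# Each walk is the sum of t - 1 independent +/-1 steps
steps <- matrix(sample(c(-1, 1), nsim * (t - 1), replace = TRUE), nsim, t - 1)
x_t <- rowSums(steps)    # positions at time t, same law as x[, 100] above
mean(x_t)                # close to 0
var(x_t)                 # close to t - 1 = 99
```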